AI Researcher
Military AI Needs Technically-Informed Regulation to Safeguard AI Research and its Applications
Simmons-Edler, Riley, Dong, Jean, Lushenko, Paul, Rajan, Kanaka, Badman, Ryan P.
Military weapon systems and command-and-control infrastructure augmented by artificial intelligence (AI) have seen rapid development and deployment in recent years. However, the sociotechnical impacts of AI on combat systems, military decision-making, and the norms of warfare have been understudied. We focus on a specific subset of lethal autonomous weapon systems (LAWS) that use AI for targeting or battlefield decisions. We refer to this subset as AI-powered lethal autonomous weapon systems (AI-LAWS) and argue that they introduce novel risks -- including unanticipated escalation, poor reliability in unfamiliar environments, and erosion of human oversight -- all of which threaten both military effectiveness and the openness of AI research. These risks cannot be addressed by high-level policy alone; effective regulation must be grounded in the technical behavior of AI models. We argue that AI researchers must be involved throughout the regulatory lifecycle. Thus, we propose a clear, behavior-based definition of AI-LAWS -- systems that introduce unique risks through their use of modern AI -- as a foundation for technically grounded regulation, given that existing frameworks do not distinguish them from conventional LAWS. Using this definition, we propose several technically-informed policy directions and invite greater participation from the AI research community in military AI policy discussions.
- Research Report (0.64)
- Overview (0.46)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military > Army (1.00)
The State of AI: Is China about to win the race?
In this conversation, the FT's John Thornhill and MIT Technology Review's Caiwei Chen consider the battle between Silicon Valley and Beijing for technological supremacy. Viewed from abroad, it seems only a matter of time before China emerges as the AI superpower of the 21st century. Here in the West, our initial instinct is to focus on America's significant lead in semiconductor expertise, its cutting-edge AI research, and its vast investments in data centers. The legendary investor Warren Buffett once warned: "Never bet against America." He is right that for more than two centuries, no other "incubator for unleashing human potential" has matched the US.
- North America > United States > California (0.25)
- Asia > China > Beijing > Beijing (0.25)
- North America > United States > Massachusetts (0.05)
- (2 more...)
- Government (0.96)
- Banking & Finance (0.70)
- Information Technology > Services (0.35)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.50)
OpenAI installs parental controls following teen's death
OpenAI will roll out parental controls within the month, allowing parents to link accounts and receive alerts when the system detects "acute distress." The changes follow a California family's lawsuit after their 16-year-old son died by suicide following intimate conversations with ChatGPT about his mental health struggles.
- North America > United States > California > San Francisco County > San Francisco (0.15)
- North America > United States > California > Los Angeles County > Los Angeles (0.06)
- Asia > Middle East > Israel (0.05)
- (7 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
How Trump's policies are affecting early-career scientists--in their own words
Every year, we recognize extraordinary young scientists, entrepreneurs, and inventors from around the world on our Innovators Under 35 list. Recent honorees told us how they're faring under the new administration. We've just published the 2025 edition. This year, though, the context is pointedly different: the US scientific community finds itself in an unprecedented position, with the very foundation of its work under attack. Since Donald Trump took office in January, his administration has fired top government scientists, targeted universities individually and academia more broadly, and made substantial funding cuts to the country's science and technology infrastructure.
- Europe > United Kingdom (0.14)
- North America > United States > Texas (0.06)
- North America > United States > Massachusetts (0.04)
- (4 more...)
- Law (1.00)
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- (2 more...)
Apple's lead AI researcher for robotics heads to Meta as part of latest exits
Apple's lead artificial intelligence researcher for robotics has departed the company to join Meta Platforms' competing effort, part of an exodus of AI talent from the company. The employee, Jian Zhang, joined the Meta Robotics Studio, the social media company confirmed on Tuesday. Separately, three more AI researchers are leaving Apple's in-house large language models team, adding to upheaval in that group, according to people with knowledge of the matter. The latest string of departures -- all taking place over the last week -- includes John Peebles, Nan Du and Zhao Meng, said the sources, who asked not to be identified because the moves haven't been announced.
RelAItionship Building: Analyzing Recruitment Strategies for Participatory AI
Kim, Eugene, Balloli, Vaibhav, Karimian, Berelian, Bondi-Kelly, Elizabeth, Fish, Benjamin
Participatory AI, in which impacted community members and other stakeholders are involved in the design and development of AI systems, holds promise as a way to ensure AI is developed to meet their needs and reflect their values. However, the process of identifying, reaching out, and engaging with all relevant stakeholder groups, which we refer to as recruitment methodology, is still a practical challenge in AI projects striving to adopt participatory practices. In this paper, we investigate the challenges that researchers face when designing and executing recruitment methodology for Participatory AI projects, and the implications of current recruitment practice for Participatory AI. First, we describe the recruitment methodologies used in AI projects using a corpus of 37 projects to capture the diversity of practices in the field and perform an initial analysis on the documentation of recruitment practices, as well as specific strategies that researchers use to meet goals of equity and empowerment. To complement this analysis, we interview five AI researchers to learn about the outcomes of recruitment methodologies. We find that these outcomes are shaped by structural conditions of their work, researchers' own goals and expectations, and the relationships built from the recruitment methodology and subsequent collaboration. Based on these analyses, we provide recommendations for designing and executing relationship-forward recruitment methods, as well as reflexive recruitment documentation practices for Participatory AI researchers.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > Michigan (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- (19 more...)
- Research Report > New Finding (0.93)
- Personal > Interview (0.93)
- Overview (0.93)
- Information Technology (0.92)
- Social Sector (0.68)
- Law (0.67)
- Health & Medicine > Therapeutic Area > Immunology (0.46)
Privacy-Aware Detection of Fake Identity Documents: Methodology, Benchmark, and Improved Algorithms (FakeIDet2)
Muñoz-Haro, Javier, Tolosana, Ruben, Fierrez, Julian, Vera-Rodriguez, Ruben, Morales, Aythami
Remote user verification in Internet-based applications is becoming increasingly important nowadays. A popular scenario for it consists of submitting a picture of the user's Identity Document (ID) to a service platform, authenticating its veracity, and then granting access to the requested digital service. An ID is well-suited to verify the identity of an individual, since it is government issued, unique, and nontransferable. However, with recent advances in Artificial Intelligence (AI), attackers can bypass security measures in IDs and create very realistic physical and synthetic fake IDs. Researchers are now trying to develop methods to detect an ever-growing number of these AI-based fakes that are almost indistinguishable from authentic (bona fide) IDs. In this counterattack effort, researchers face an important challenge: the difficulty of using real data to train fake ID detectors. This scarcity of real data for research and development stems from the sensitive nature of these documents, which are usually kept private by the ID owners (the users) and the ID holders (e.g., governments, police, banks). The main contributions of our study are: 1) We propose and discuss a patch-based methodology to preserve privacy in fake ID detection research. 2) We provide a new public database, FakeIDet2-db, comprising over 900K real/fake ID patches extracted from 2,000 ID images, acquired using different smartphone sensors, illumination and height conditions, etc. In addition, three physical attacks are considered: print, screen, and composite. 3) We present a new privacy-aware fake ID detection method, FakeIDet2. 4) We release a standard reproducible benchmark that considers physical and synthetic attacks from popular databases in the literature.
- North America > United States (0.14)
- Europe > Spain > Community of Madrid > Madrid (0.04)
- South America (0.04)
- Asia (0.04)
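The patch-based methodology described in the abstract can be illustrated with a minimal sketch: an ID photo is tiled into small patches (so that no single training sample exposes the full document), each patch is scored independently by a detector, and the per-patch scores are pooled into a document-level decision. This is a hypothetical illustration of the general idea, not the authors' FakeIDet2 implementation; the function names, patch size, and mean-pooling choice are all assumptions.

```python
def extract_patches(image, patch=64, stride=64):
    """Tile a 2-D pixel grid (a list of rows) into square patches.
    Individually, small patches reveal far less personal information
    than the full ID image, which is the privacy argument here."""
    h, w = len(image), len(image[0])
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            # Crop rows y..y+patch, then columns x..x+patch from each row.
            patches.append([row[x:x + patch] for row in image[y:y + patch]])
    return patches

def document_score(patch_scores):
    """Pool per-patch fake probabilities into one document-level score.
    Mean pooling is an assumption; other aggregations are possible."""
    return sum(patch_scores) / len(patch_scores)
```

In practice the per-patch scores would come from a trained classifier; the point of the sketch is only that training and sharing happen at the patch level, never on whole documents.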
Toward AI Matching Policies in Homeless Services: A Qualitative Study with Policymakers
Johnston, Caroline M., Koumoundouros, Olga, Hwang, Angel Hsing-Chi, Onasch-Vera, Laura, Rice, Eric, Vayanos, Phebe
Artificial intelligence researchers have proposed various data-driven algorithms to improve the processes that match individuals experiencing homelessness to scarce housing resources. It remains unclear whether and how these algorithms are received or adopted by practitioners and what their corresponding consequences are. Through semi-structured interviews with 13 policymakers in homeless services in Los Angeles, we investigate whether such change-makers are open to the idea of integrating AI into the housing resource matching process, identifying where they see potential gains and drawbacks from such a system in issues of efficiency, fairness, and transparency. Our qualitative analysis indicates that, even when aware of various complicating factors, policymakers welcome the idea of an AI matching tool if thoughtfully designed and used in tandem with human decision-makers. Though there is no consensus as to the exact design of such an AI system, insights from policymakers raise open questions and design considerations that can be enlightening for future researchers and practitioners who aim to build responsible algorithmic systems to support decision-making in low-resource scenarios.
- North America > United States > California > Santa Clara County (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.06)
- North America > Canada (0.04)
- Questionnaire & Opinion Survey (0.86)
- Research Report > New Finding (0.68)
- Personal > Interview (0.48)
- Government > Social Services (0.68)
- Government > Regional Government > North America Government > United States Government (0.68)
- Banking & Finance > Real Estate (0.67)
- (4 more...)
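The "AI suggests, human decides" arrangement that policymakers favored can be sketched in a few lines: a scoring model ranks client-to-unit pairings, a greedy pass proposes one-to-one assignments, and low-confidence proposals are flagged for caseworker review rather than auto-approved. This is a minimal illustration under assumed names and a 0.5 review threshold, not any deployed or proposed system from the study.

```python
def propose_matches(clients, units, score, review_threshold=0.5):
    """Greedy one-to-one matching: repeatedly take the highest-scoring
    remaining (client, unit) pair. Proposals below the threshold get a
    needs_review flag so a human decision-maker stays in the loop."""
    pairs = sorted(
        ((score(c, u), c, u) for c in clients for u in units),
        reverse=True,
    )
    assigned_c, assigned_u, matches = set(), set(), []
    for s, c, u in pairs:
        if c not in assigned_c and u not in assigned_u:
            matches.append({
                "client": c,
                "unit": u,
                "score": s,
                "needs_review": s < review_threshold,
            })
            assigned_c.add(c)
            assigned_u.add(u)
    return matches
```

Greedy matching is only one design choice; an optimal assignment (e.g. the Hungarian algorithm) or fairness constraints could replace it without changing the human-review pattern.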
IJCAI in Canada: 90-second pitches from the next generation of AI researchers
Ahead of the 34th International Joint Conference on Artificial Intelligence (IJCAI 2025), which will take place in Montréal, Canada, from 16 to 22 August 2025, the Local Arrangements Committee has launched a campaign to showcase the next generation of AI researchers in Canada. Through a series of 90-second videos, we meet students based in Canada and find out a bit about their work. Imane Chafi, PhD candidate at Polytechnique Montréal, uses AI models to support dentists in designing dental preparations for dental crowns more efficiently and accurately. Liliane-Caroline Demers, Student Communication Coordinator for the IJCAI 2025 Local Arrangements Committee and a recent master's graduate from Polytechnique Montréal, researches AI-generated music. Using a neurosymbolic approach that combines machine learning with constraint programming at inference time, she creates music that is both stylistically authentic and structurally coherent.
- North America > Canada > Quebec > Montreal (0.85)
- North America > Canada > Ontario > Toronto (0.26)
- Media > Music (0.60)
- Leisure & Entertainment (0.60)
- Health & Medicine (0.40)
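The neurosymbolic pattern mentioned above (a learned model proposes, symbolic constraints dispose at inference time) can be sketched simply: a model supplies next-note probabilities, a hard constraint filters out illegal candidates, and the best remaining note is chosen. This is a hypothetical toy interface to illustrate the general technique, not the actual project's code; real constraint programming handles far richer musical rules than the scale-membership check used here.

```python
def constrained_decode(model_probs, allowed, length):
    """Generate a melody step by step. model_probs(melody) returns a
    dict mapping candidate notes to learned probabilities; the symbolic
    constraint (membership in `allowed`, e.g. a musical scale) filters
    candidates before the most probable legal note is appended."""
    melody = []
    for _ in range(length):
        probs = model_probs(melody)
        legal = {n: p for n, p in probs.items() if n in allowed}
        if not legal:
            break  # constraints unsatisfiable at this step: stop early
        melody.append(max(legal, key=legal.get))
    return melody
```

Greedy selection keeps the sketch short; sampling from the renormalized legal distribution would be a natural variant.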
Five ways that AI is learning to improve itself
By the same token, Clune says, automating AI research and development could have enormous upsides. On our own, we humans might not be able to think up the innovations and improvements that will allow AI to one day tackle prodigious problems like cancer and climate change. For now, human ingenuity is still the primary engine of AI advancement; otherwise, Meta would hardly have made such exorbitant offers to attract researchers to its superintelligence lab. But AI is already contributing to its own development, and it's set to take even more of a role in the years to come. Here are five ways that AI is making itself better.